    Credibility in Empirical Legal Analysis

    Empirical analysis is central to both legal scholarship and litigation, but it is not credible. Researchers can manipulate data to arrive at any conclusion they wish. A practice known as data fishing—searching for and selectively reporting methods and results that are favorable to the researcher—entirely invalidates a study’s results by giving rise to false positives and false impressions. Nevertheless, it is prevalent in law, leading to false claims, incorrect verdicts, and destructive policy. In this article, I examine the harm that data fishing in empirical legal research causes. I then build on methods in the sciences to develop a framework for eliminating data fishing and restoring confidence in empirical analysis in legal scholarship and litigation. This framework—which I call DASS (an acronym for Design, Analyze, Scrutinize, and Substantiate)—is designed to be simple, flexible, and practical for application in legal settings. It provides a concrete method for researchers to safeguard against data fishing and for consumers of empirical analysis to evaluate a researcher’s empirical claims. Finally, after describing the DASS framework and its application in various legal settings, I consider its implications for the “hired-gun” problem and other difficulties related to the reliability of expert evidence.
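    The claim that data fishing gives rise to false positives has a simple statistical illustration. The sketch below is mine rather than the article’s, and the specification count, sample size, and significance threshold are arbitrary assumptions: a researcher who tries twenty unrelated predictors on pure noise and reports only the best p-value will obtain a “significant” result far more often than the nominal five percent of the time.

```python
# Minimal sketch (not from the article) of why specification searching
# inflates false positives. All parameters below are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_specs, n_obs = 1000, 20, 100

false_positives = 0
for _ in range(n_studies):
    # Outcome with no true relationship to any candidate predictor.
    y = rng.normal(size=n_obs)
    # Try many specifications and keep only the most favorable p-value.
    best_p = min(
        stats.pearsonr(rng.normal(size=n_obs), y)[1]
        for _ in range(n_specs)
    )
    if best_p < 0.05:
        false_positives += 1

# Roughly 1 - 0.95**20, about 64%, versus the nominal 5% rate.
print(f"Observed false-positive rate after spec search: "
      f"{false_positives / n_studies:.0%}")
```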

    An Objective-Chance Exception to the Rule against Character Evidence

    A central principle of U.S. law is that individuals should be judged in court based on their actions and not on their character. Rule 404 of the Federal Rules of Evidence therefore prohibits evidence of an individual’s previous acts to prove that the individual acted in accordance with a certain character or propensity. But courts regularly deviate from or altogether ignore this rule, resulting in arbitrariness and judgments based on an individual’s prior acts rather than on evidence regarding the events at issue in a case. In this Article, I argue that at the center of the unpredictability surrounding the rule against character evidence is a type of evidence that I refer to as “objective-chance evidence”—that is, evidence regarding other events of the same general kind as the event in question, offered to show that the event in question is due to some intent or design rather than to accident or chance. I apply simple scientific principles of information aggregation to examine the nature of objective-chance evidence in the courts and literature. I then argue that central to a more logical and effective approach to character evidence are (1) a proper understanding of objective-chance evidence as a particular category of character evidence, and (2) an “objective-chance exception” that replaces the rule against character evidence with a Rule 403 balancing for objective-chance evidence. I show that these conditions may permit a more coherent interpretation of Rule 404 and ultimately a stricter adherence to the rule against character evidence.
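    The information-aggregation logic behind objective-chance evidence can be made concrete with a small probability calculation. The sketch below is a hypothetical illustration of my own; the per-event probability is an invented number and the independence assumption is a simplification, not anything taken from the article. It only shows why several independent occurrences of the same rare kind of event become hard to attribute to chance alone.

```python
# Hypothetical illustration of objective-chance reasoning (numbers invented):
# as independent "accidents" of the same rare kind accumulate, pure chance
# becomes an increasingly implausible explanation relative to design.
p_accident = 0.01  # assumed probability of one such accident for one person

for k in range(1, 5):
    # Independence assumed for simplicity of illustration.
    p_all_by_chance = p_accident ** k
    print(f"{k} similar event(s): P(all by chance) = {p_all_by_chance:.8f}")
```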

    Character Evidence as a Conduit for Implicit Bias

    The Federal Rules of Evidence purport to prohibit character evidence, or evidence regarding a defendant’s past bad acts or propensities offered to suggest that the defendant acted in accordance with a certain character trait on the occasion in question. However, courts regularly admit character evidence through an expanding set of legislative and judicial exceptions that have all but swallowed the rule. In the usual narrative, character evidence is problematic because jurors place excessive weight on it or punish the defendant for past behavior. Lawmakers rely on this narrative when they create exceptions. However, this account arguably misses a highly troublesome feature of character evidence and far understates its pernicious effects. In this Article, I develop a new model of character evidence that refocuses the debate on the distortions associated with the prior beliefs and prejudices inherent in a juror’s perception of character evidence. Specifically, I draw on disciplines outside of law—including Bayesian statistics and cognitive psychology—to explain how jurors use character evidence to arrive at a verdict. I then apply this framework to show that when a court admits character evidence through exceptions, judgments based on character evidence are inherently biased against certain groups of people based on their race, sex, appearance, accent, education, economic status, and other personal characteristics. I argue that exceptions to the rule against character evidence therefore drive inequality in the U.S. legal system, and that this provides a strong reason to limit such exceptions and to reverse the current trend toward a more permissive rule.
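    One way to read the Bayesian account sketched in the abstract is that two jurors who hear identical evidence but hold different prior beliefs about the defendant’s character reach different conclusions. The toy calculation below is my own illustration with invented numbers, not the article’s model; it only shows the mechanism by which a prejudiced prior propagates into the posterior judgment of guilt.

```python
# Toy Bayesian juror (invented numbers): identical evidence, different priors.
def posterior_guilt(prior_guilt: float, likelihood_ratio: float) -> float:
    """Update P(guilt) given evidence with P(E|guilty)/P(E|innocent) = LR."""
    prior_odds = prior_guilt / (1.0 - prior_guilt)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

lr = 4.0  # assumed strength of the trial evidence, identical for both jurors
for label, prior in [("unbiased prior", 0.10), ("prejudiced prior", 0.40)]:
    print(f"{label}: P(guilt | evidence) = {posterior_guilt(prior, lr):.2f}")
```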

    Causation in Civil Rights Legislation

    Employees are often left unprotected from discrimination because they are unable to satisfy the requirement of causation. Courts have made clear that to obtain legal redress for discrimination, it is generally insufficient to show that a protected characteristic such as race or sex was a “motivating factor” of an adverse employment decision. Rather, under Supreme Court precedent—including the Court’s Comcast and Babb decisions in the 2020 term—the antidiscrimination statutes generally require a showing of “but-for” causation. Consequently, many victims of discrimination will be unable to prevail because an employer can readily refute allegations of discrimination by asserting a legitimate purpose—true or not—for the adverse decision. Therefore, although there is good reason to reject the motivating-factor test, the but-for requirement undermines the objectives of antidiscrimination law. In this Article, I draw on notions of cause and effect in the sciences and in tort law to propose a new standard of causation for antidiscrimination law. In particular, I formulate a simple test—which I call the “fortified NESS” test, or “FNESS”—for courts and legislatures to apply as a uniform and effective standard of causation in all disparate-treatment cases. I then employ this formulation to propose concrete amendments to the civil rights statutes, and I demonstrate why these amendments are necessary and how they allow courts to uphold the critical aims of antidiscrimination law.
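    The gap between but-for causation and a NESS-style test is easiest to see in an overdetermined case, where two factors each suffice on their own. The sketch below is a simplified boolean illustration of my own; it is not the article’s FNESS formulation, and the two motives are invented labels. With two independently sufficient motives, neither is a but-for cause of the decision, yet each is a necessary element of a sufficient set.

```python
from itertools import chain, combinations

# Simplified overdetermination example (my illustration, not the FNESS test):
# the adverse decision occurs if EITHER motive is present.
def outcome(factors: set) -> bool:
    return "discriminatory motive" in factors or "legitimate motive" in factors

present = {"discriminatory motive", "legitimate motive"}

def but_for(factor: str) -> bool:
    # But-for cause: removing the factor changes the outcome.
    return outcome(present) and not outcome(present - {factor})

def ness(factor: str) -> bool:
    # NESS cause: the factor is a necessary element of some sufficient
    # subset of the factors actually present.
    subsets = chain.from_iterable(
        combinations(present, r) for r in range(1, len(present) + 1))
    return any(
        factor in s and outcome(set(s)) and not outcome(set(s) - {factor})
        for s in subsets)

for f in sorted(present):
    print(f"{f}: but-for = {but_for(f)}, NESS = {ness(f)}")
```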

    The Effects of Comparable‐Case Guidance on Awards for Pain and Suffering and Punitive Damages: Evidence from a Randomized Controlled Trial

    Damage awards for pain and suffering and punitive damages are notoriously unpredictable. Courts provide minimal, if any, guidance to jurors determining these awards, and apply similarly minimal standards in reviewing them. Lawmakers have enacted crude measures, such as damage caps, aimed at curbing award unpredictability, while ignoring less drastic alternatives that involve guiding jurors with information regarding damage awards in comparable cases (“comparable‐case guidance” or “prior‐award information”). The primary objections to the latter approach are based on the argument that, because prior‐award information uses information regarding awards in distinct cases, it introduces the possibility of biasing the award, or distorting the award size, even if prior‐award information reduces the variability of awards. This paper responds to these objections. It reports and interprets the results of a large randomized controlled trial designed to test juror behavior in response to prior‐award information and, specifically, to examine the effects of prior‐award information on both variability and bias under a range of conditions related to the foregoing objections. We conclude that there is strong evidence that prior‐award information improves the “accuracy” of awards—that it significantly reduces the variability of awards, and that any introduction of bias, or distortion of award size, is minor relative to its beneficial effect on variability. Furthermore, we conclude that there is evidence that jurors respond to prior‐award information as predicted in recent literature, and in line with the “optimal” use of such information; and that prior‐award information may cause jurors to approach award determinations more thoughtfully or analytically.
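    For readers who want the variability-versus-bias distinction in concrete terms, the sketch below uses simulated award data with invented parameters (not the trial’s actual data) to compute the two summary quantities the abstract contrasts: the spread of awards within each condition and any systematic shift in the typical award between conditions.

```python
# Simulated (invented) award data illustrating the variability/bias
# distinction; these are NOT the study's results.
import numpy as np

rng = np.random.default_rng(1)
control = rng.lognormal(mean=12.0, sigma=1.0, size=500)  # no guidance
guided = rng.lognormal(mean=12.1, sigma=0.4, size=500)   # prior-award info

for name, awards in [("control", control), ("guided", guided)]:
    iqr = np.subtract(*np.percentile(awards, [75, 25]))
    print(f"{name:8s} median = {np.median(awards):12,.0f}  IQR = {iqr:12,.0f}")

# "Variability" is the spread within a condition (here, the IQR);
# "bias" is any systematic shift in the typical award between conditions.
print(f"shift in median (bias proxy) = "
      f"{np.median(guided) - np.median(control):12,.0f}")
```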